Sensitivity analysis methods


Sensitivity Analysis to Unobserved Confounding with Copula-based Normalizing Flows

Balgi, Sourabh, Braun, Marc, Peña, Jose M., Daoud, Adel

arXiv.org Machine Learning

We propose a novel method for sensitivity analysis to unobserved confounding in causal inference. The method builds on a copula-based causal graphical normalizing flow that we term $ρ$-GNF, where $ρ\in [-1,+1]$ is the sensitivity parameter. The parameter represents the non-causal association between exposure and outcome due to unobserved confounding, which is modeled as a Gaussian copula. In other words, the $ρ$-GNF enables scholars to estimate the average causal effect (ACE) as a function of $ρ$, accounting for various confounding strengths. The output of the $ρ$-GNF is what we term the $ρ_{curve}$, which provides the bounds for the ACE given an interval of assumed $ρ$ values. The $ρ_{curve}$ also enables scholars to identify the confounding strength required to nullify the ACE. We also propose a Bayesian version of our sensitivity analysis method: assuming a prior over the sensitivity parameter $ρ$ yields a posterior distribution over the ACE, from which credible intervals can be derived. Finally, through experiments on simulated and real-world data, we show the benefits of our sensitivity analysis method.
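The $ρ$-GNF itself is a learned normalizing flow, but the shape of a $ρ_{curve}$ can be illustrated with a toy linear-Gaussian model (an assumption for illustration, not the paper's method): a bivariate Gaussian couples the exposure and outcome noise with correlation $ρ$, so the confounding bias has a closed form and subtracting it for each assumed $ρ$ traces out the curve.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho_true = 0.5  # hypothetical "true" confounding strength (assumption)

# Latent bivariate Gaussian noise couples exposure and outcome,
# mimicking the Gaussian-copula association described in the abstract.
cov = [[1.0, rho_true], [rho_true, 1.0]]
u_a, u_y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
a = (u_a > 0).astype(float)   # binary exposure
y = a + u_y                   # true ACE = 1.0 by construction

naive_ace = y[a == 1].mean() - y[a == 0].mean()  # biased when rho != 0

def adjusted_ace(rho):
    # In this toy model the confounding bias has the closed form
    # 2 * rho * sqrt(2/pi); subtracting it for each assumed rho
    # traces out the analogue of the rho-curve.
    return naive_ace - 2.0 * rho * np.sqrt(2.0 / np.pi)

rho_grid = np.linspace(-1.0, 1.0, 201)
rho_curve = adjusted_ace(rho_grid)
# Confounding strength that would drive the adjusted ACE to zero:
rho_null = naive_ace / (2.0 * np.sqrt(2.0 / np.pi))
```

Evaluating the curve at the data-generating $ρ$ recovers the true ACE of 1.0, and `rho_null` is the toy-model analogue of the confounding strength required to nullify the ACE.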


Bounds and Sensitivity Analysis of the Causal Effect Under Outcome-Independent MNAR Confounding

Peña, Jose M.

arXiv.org Machine Learning

We report assumption-free bounds for any contrast between the probabilities of the potential outcome under exposure and non-exposure when the confounders are missing not at random. We assume that the missingness mechanism is outcome-independent. We also report a sensitivity analysis method to complement our bounds.
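The paper's bounds are tailored to the outcome-independent MNAR setting, but the general flavor of assumption-free bounding can be sketched with classical Manski-style worst-case bounds on a binary potential-outcome contrast (a hypothetical toy dataset, not the paper's bounds):

```python
import numpy as np

# Toy observational data: binary exposure a and binary outcome y.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=10_000)
y = rng.integers(0, 2, size=10_000)

def no_assumption_bounds(a, y):
    # Worst-case bounds on P(Y^1=1) and P(Y^0=1): the unobserved
    # potential outcomes of the unexposed (resp. exposed) units may be
    # all 0s or all 1s, with no further assumptions.
    p_y1_a1 = np.mean((a == 1) & (y == 1))
    p_y1_a0 = np.mean((a == 0) & (y == 1))
    p_a1, p_a0 = np.mean(a == 1), np.mean(a == 0)
    y1 = (p_y1_a1, p_y1_a1 + p_a0)   # bounds on P(Y^1 = 1)
    y0 = (p_y1_a0, p_y1_a0 + p_a1)   # bounds on P(Y^0 = 1)
    # Bounds on the risk difference P(Y^1=1) - P(Y^0=1):
    return (y1[0] - y0[1], y1[1] - y0[0])

lo, hi = no_assumption_bounds(a, y)
```

Without assumptions the risk-difference bounds always have width one; the paper's sensitivity analysis exists precisely to narrow such intervals under a stated missingness mechanism.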


Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models

Wang, Zhengguang

arXiv.org Artificial Intelligence

This work evaluates interpretability methods for time-series deep learning. Sensitivity analysis assesses how input changes affect the output, constituting a key component of interpretation. Among post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work investigates perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance. Specifically, my work answers three research questions: 1) Do different sensitivity analysis (SA) methods yield comparable outputs and attribute-importance rankings? 2) Using the same sensitivity analysis method, do different deep learning (DL) models impact the output of the sensitivity analysis? 3) How well do the results from sensitivity analysis methods align with the ground truth?
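The core mechanic of perturbation-based SA is simple: occlude one input time step at a time and record how much the output moves. A minimal sketch, with a hand-built stand-in function in place of a trained Transformer (an assumption for illustration):

```python
import numpy as np

def model(x):
    # Stand-in for a trained time-series model (assumption): the output
    # depends strongly on time steps 5 and 12 and weakly on the rest.
    return 3.0 * x[5] - 2.0 * x[12] + 0.1 * x.sum()

def perturbation_sensitivity(f, x, baseline=0.0):
    # Occlusion-style SA: replace one time step at a time with a
    # baseline value and record the change in the model output.
    ref = f(x)
    scores = np.empty_like(x)
    for t in range(len(x)):
        x_pert = x.copy()
        x_pert[t] = baseline
        scores[t] = abs(f(x_pert) - ref)
    return scores

x = np.linspace(1.0, 2.0, 20)          # toy input series of 20 steps
scores = perturbation_sensitivity(model, x)
ranking = np.argsort(scores)[::-1]     # most- to least-important steps
```

Comparing such rankings across SA variants (different baselines, window sizes) and across DL models is exactly the kind of agreement study the three research questions describe.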


Assessing Ranking and Effectiveness of Evolutionary Algorithm Hyperparameters Using Global Sensitivity Analysis Methodologies

Ojha, Varun, Timmis, Jon, Nicosia, Giuseppe

arXiv.org Artificial Intelligence

We present a comprehensive global sensitivity analysis of two single-objective and two multi-objective state-of-the-art global optimization evolutionary algorithms, framed as an algorithm configuration problem. That is, we investigate the influence that hyperparameters have on algorithm performance, in terms of both their direct effects and their interaction effects with other hyperparameters. Using three sensitivity analysis methods, Morris LHS, Morris, and Sobol, to systematically analyze the tunable hyperparameters of covariance matrix adaptation evolution strategy, differential evolution, non-dominated sorting genetic algorithm III, and multi-objective evolutionary algorithm based on decomposition, the framework reveals how hyperparameters respond to sampling methods and performance metrics. That is, it answers questions such as which hyperparameters influence performance, how they interact, how much they interact, and how strong their direct influence is. Consequently, the ranking of hyperparameters suggests the order in which they should be tuned, and the pattern of influence reveals the stability of the algorithms.
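The Morris method used here is a one-at-a-time screening technique: perturb each hyperparameter from many random base points and average the absolute elementary effects ($\mu^*$) to rank direct influence. A minimal sketch with a hypothetical performance function standing in for an actual evolutionary-algorithm benchmark (all names and the function are assumptions):

```python
import numpy as np

def performance(h):
    # Hypothetical performance of an optimizer as a function of three
    # normalized hyperparameters (a stand-in, not a real benchmark).
    pop, sigma, lr = h
    return -(pop - 0.5) ** 2 + 0.3 * sigma + 0.05 * lr * sigma

def morris_elementary_effects(f, dim, trajectories=50, delta=0.1, seed=0):
    # One-at-a-time Morris screening: step each input by `delta` from
    # random base points and average the absolute elementary effects.
    rng = np.random.default_rng(seed)
    effects = np.zeros(dim)
    for _ in range(trajectories):
        base = rng.uniform(0.0, 1.0 - delta, size=dim)
        f0 = f(base)
        for i in range(dim):
            stepped = base.copy()
            stepped[i] += delta
            effects[i] += abs((f(stepped) - f0) / delta)
    return effects / trajectories  # mu* per hyperparameter

mu_star = morris_elementary_effects(performance, dim=3)
```

Ranking hyperparameters by $\mu^*$ gives the tuning order the abstract mentions; Sobol indices additionally decompose variance into the direct and interaction effects.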


Sensitivity Analysis for Computationally Expensive Models using Optimization and Objective-oriented Surrogate Approximations

Wang, Yilun, Shoemaker, Christine A.

arXiv.org Machine Learning

In this paper, we focus on developing efficient sensitivity analysis methods for a computationally expensive objective function $f(x)$ in the case where its minimization has just been performed. Here, "computationally expensive" means that each evaluation takes a significant amount of time; our main goal is therefore to use a small number of function evaluations of $f(x)$ to infer the sensitivity information of the different parameters. Correspondingly, we treat the optimization procedure as an adaptive experimental design and re-use its available function evaluations as the initial design points to establish a surrogate model $s(x)$ (also called a response surface). The sensitivity analysis is then performed on $s(x)$ in lieu of $f(x)$. Furthermore, we propose a new local multivariate sensitivity measure, for example around the optimal solution, for high-dimensional problems. A corresponding "objective-oriented experimental design" is then proposed to make the generated surrogate $s(x)$ better suited to the accurate calculation of the proposed local sensitivity quantities. In addition, we demonstrate the better performance of the Gaussian radial basis function interpolator over Kriging in our setting, which involves relatively high dimensionality and few experimental design points. Numerical experiments demonstrate that the optimization procedure and the "objective-oriented experimental design" behave much better than the classical Latin hypercube design, and that the performance of Kriging is not as good as that of the Gaussian RBF, especially for high-dimensional problems.
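The basic pipeline, re-use past evaluations, fit a Gaussian RBF surrogate, then do cheap local sensitivity analysis on the surrogate, can be sketched as follows (the objective, sample sizes, and shape parameter are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def expensive_f(x):
    # Stand-in for the expensive objective f(x) (assumption): a smooth
    # quadratic bowl with its minimum at x = 0.3 in every coordinate.
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(60, 4))  # re-used optimization evaluations
y = expensive_f(X)

def fit_gaussian_rbf(X, y, eps=2.0):
    # Gaussian radial basis function interpolant s(x) of the samples,
    # with a tiny ridge term for numerical stability.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(np.exp(-eps * d2) + 1e-8 * np.eye(len(X)), y)
    def s(x):
        d2x = ((x[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2x) @ weights
    return s

s = fit_gaussian_rbf(X, y)

# Local sensitivity around the optimizer's solution via central finite
# differences on the cheap surrogate instead of on expensive_f itself.
x_opt = np.full((1, 4), 0.3)
h = 0.05
grad = np.array([
    (s(x_opt + h * np.eye(4)[i]) - s(x_opt - h * np.eye(4)[i]))[0] / (2 * h)
    for i in range(4)
])
```

Every sensitivity query here costs only surrogate evaluations; the paper's objective-oriented design goes further by choosing *additional* expensive evaluations specifically to sharpen $s(x)$ near the optimum.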